June 24, 2025
By Bob O'Donnell
In case you haven’t heard, GenAI is old news. Now, it’s all about agentic AI.
At least, that certainly seems to be the theme based on the latest announcements from the major tech industry vendors. All of them are focused on driving the story of more autonomous actions enabled by AI.
In truth, of course, there's still a tremendous amount of activity and advancement happening in the "traditional" era of generative AI, particularly when it comes to bringing that technology into businesses and their internal IT operations. To its credit, HPE was among the first to actively discuss and demonstrate the potential for running GenAI-powered applications within the confines of a corporate data center or private cloud. At last year's Discover event, the company unveiled Private Cloud AI, which offered organizations the potential to build solutions with their own GPU-equipped servers.
At this year’s show, HPE showed important real-world advancements in those applications but also took the next step in driving the possibility for AI-powered agents as part of these solutions. Built around a framework the company is calling GreenLake Intelligence, HPE debuted a whole set of multi-part agents that can automate numerous key IT operations across storage, networking, configuration, observability, and more. In fact, the company presented one of the most compelling and comprehensive stories regarding agent-based IT developments that I’ve seen from almost any IT vendor. Importantly, HPE made it clear that the activities are still designed to keep humans in the loop—a key point of concern given how potentially impactful the actions of any autonomous agents could be.
Building on the company's long-established GreenLake private cloud software platform, GreenLake Intelligence leverages a series of both GenAI large language models (LLMs) and traditional machine learning (ML) models that are specifically trained to deal with IT-related issues using decades' worth of data that HPE has on the topic. A new chatbot-style GreenLake Copilot interface sits on top of these models, allowing IT professionals to pose questions, search for ways to resolve problems, and more, all in plain English. From issues related to FinOps and spending to workload optimization, sustainability measurements, network troubleshooting and more, the idea is to reduce the enormous complexity that IT professionals now face.
All of the GreenLake Intelligence offerings start with the concept of a multi-agent orchestrator that, in turn, manages multiple other more task- or content-specific sub-agents that are capable of looking for potential issues or walking through specific tasks. The tools can then be used to provide step-by-step guidance on how to resolve issues and perform tasks, or even carry them out autonomously (again, with human confirmation if desired).
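The orchestrator/sub-agent pattern described above can be sketched in a few lines of code. This is a hypothetical illustration, not HPE's actual implementation: the agent names, the `Finding` record, and the `approve` callback (which stands in for the human-in-the-loop confirmation step) are all assumptions made for clarity.

```python
# Hypothetical sketch of a multi-agent orchestrator with human-in-the-loop
# approval. All names here are illustrative, not real HPE APIs.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Finding:
    agent: str          # which sub-agent found the issue
    issue: str          # description of the detected problem
    proposed_fix: str   # suggested remediation step


class SubAgent:
    """A domain-specific agent (e.g. network, storage) that scans for issues."""
    def __init__(self, name: str, scan: Callable[[], List[str]]):
        self.name = name
        self.scan = scan

    def investigate(self) -> List[Finding]:
        return [Finding(self.name, issue, f"remediate: {issue}")
                for issue in self.scan()]


class Orchestrator:
    """Fans a request out to sub-agents, then gates every proposed
    remediation behind an approval callback (the human in the loop)."""
    def __init__(self, agents: List[SubAgent]):
        self.agents = agents

    def run(self, approve: Callable[[Finding], bool]) -> List[Finding]:
        applied = []
        for agent in self.agents:
            for finding in agent.investigate():
                if approve(finding):  # human confirmation gate
                    applied.append(finding)
        return applied


# Usage: a network sub-agent reports one issue; approval is automatic here,
# but in practice this callback would prompt an operator.
net = SubAgent("network", lambda: ["switch port flapping"])
stor = SubAgent("storage", lambda: [])
result = Orchestrator([net, stor]).run(approve=lambda f: True)
```

The key design point is that remediation never happens inside a sub-agent; every action flows back through the orchestrator's approval gate, which is where a confirmation prompt (or a policy allowing full autonomy) would live.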
A good example is the company's Aruba Central network management application, which leverages its own version of the GreenLake Copilot interface and can not only recognize but also help resolve complex issues quickly. The stored knowledge from the AI and ML models—accessed via what HPE calls an agentic mesh—is used to power the application. Like other aspects of the GreenLake Intelligence service, it can also quickly build visual dashboards of incoming log data, making it easier to spot potential issues.
Another key part of HPE’s latest offering is its new CloudOps software suite, which combines an upgraded version of its OpsRamp observability platform, Morpheus virtualization and cloud management tools, and Zerto data management and data security software. OpsRamp, in particular, features more GenAI-powered capabilities designed to see, manage, orchestrate, and fix numerous different elements within an organization’s IT infrastructure including compute, storage, and network.
Of course, it can be difficult for many organizations to figure out exactly what they need when it comes to leveraging these new types of GenAI and agentic AI-enabled offerings. To help in that regard, HPE also offers a newly upgraded CloudPhysics Plus assessment tool that organizations can use to help plan migrations to more modern hybrid and AI-powered operations.
In addition to software and services, HPE also debuted a range of new hardware solutions at Discover, including new additions to its ProLiant Gen12 line that use AMD's latest EPYC CPUs. This is good news for organizations that want more choice in CPU suppliers. Building on last year's set of Nvidia GPU-equipped servers launched as part of Private Cloud AI, HPE also announced a second generation of these AI factories, including some with the latest Nvidia RTX Pro 6000 Blackwell GPUs that were designed specifically with enterprise AI factories in mind. Importantly, the company said that these next-generation systems can be connected to and work side-by-side with first-generation models and future generations as well, allowing companies to purchase and use systems knowing that future compatibility will not be an issue. HPE also announced that its OpsRamp observability software has been certified for use on any Nvidia AI Factory-based compute system, including those built by competitors.
In storage, the company unveiled the Alletra Storage MP X10000 system, which is optimized for use with AI workloads. The all-flash, software-defined device can be controlled via the GreenLake platform tools to help manage the large data sets required for AI workloads and can be optimized via the new GreenLake Intelligence capabilities. One of the other more interesting things that HPE is offering with the MP X10000 is built-in support for MCP (Model Context Protocol) servers that can connect and communicate with the full stack of IT components including servers, networks, observability tools, other storage components, and more. MCP is seeing tremendous traction as an industry standard to drive connections between various LLMs and digital agent platforms, so it's good to see HPE staying ahead of the curve on it.
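For readers unfamiliar with MCP, it is built on JSON-RPC 2.0: a client (typically an LLM-driven agent) invokes tools exposed by an MCP server via a `tools/call` request. The sketch below constructs such a message; the tool name and arguments are hypothetical, invented here to suggest what a storage-facing MCP server might expose, and do not correspond to any actual HPE product API.

```python
# Minimal sketch of an MCP-style JSON-RPC 2.0 tool-call request.
# The "tools/call" method is part of the MCP spec; the tool name and
# arguments below are hypothetical examples, not a real HPE interface.
import json


def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request invoking an MCP server tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Example: an agent asking a (hypothetical) storage MCP server for metrics.
msg = mcp_tool_call(1, "query_storage_metrics", {"volume": "vol-42"})
```

Because the wire format is plain JSON-RPC, any LLM or agent platform that speaks MCP can discover and invoke a server's tools without product-specific integration work, which is exactly why the standard is gaining traction.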
In fact, the announcements from Discover overall show the company is clearly focused on the future. At the same time, HPE carefully balanced the message with realities from the present, including discussions of the power and efficiency savings that companies can get by upgrading older server and other infrastructure hardware to more current versions. That may not be as sexy or compelling, but it is the reality of the mixed environments that many of its customers currently find themselves in.
Looking ahead, the agentic vision for IT that HPE presented at Discover is unquestionably compelling, but it’s important to remember that we’re still very much in the early stages of AgenticOps. For most organizations it’s likely going to be a multi-year process to reach that vision, but it’s great for companies to have a roadmap towards where they need to go.
Here’s a link to the original column: https://www.linkedin.com/pulse/hpes-greenlake-intelligence-brings-agentic-ai-bob-o-donnell-zjvcc
Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.